Backprop-Free Reinforcement Learning with Active Neural Generative Coding
Authors
Abstract
In humans, perceptual awareness facilitates the fast recognition and extraction of information from sensory input. This awareness largely depends on how the human agent interacts with its environment. In this work, we propose active neural generative coding, a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments. Specifically, we develop an intelligent agent that operates even under sparse rewards, drawing inspiration from the cognitive theory of planning as inference. We demonstrate on several simple control problems that our framework performs competitively with deep Q-learning. The robust performance of our agent offers promising evidence that a backprop-free approach to neural inference can drive goal-directed behavior.
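The abstract describes the approach only at a high level. As an illustrative sketch (not the paper's actual ANGC circuit or update rules), the snippet below shows one way a backprop-free, locally updated generative model could sit inside an action loop, with prediction error doubling as an epistemic exploration bonus when task rewards are sparse. All names (W_enc, W_gen, local_update, select_action), dimensions, the toy dynamics, and the crude value proxy are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for a toy control problem (not from the paper).
obs_dim, act_dim, hid_dim = 4, 2, 32

# Encoder (observation+action -> latent) and generator (latent -> predicted
# next observation). Both are updated with purely local, layer-wise rules.
W_enc = 0.1 * rng.standard_normal((hid_dim, obs_dim + act_dim))
W_gen = 0.1 * rng.standard_normal((obs_dim, hid_dim))
lr = 0.01


def one_hot(a, n=act_dim):
    v = np.zeros(n)
    v[a] = 1.0
    return v


def local_update(obs, act, next_obs):
    """One backprop-free update: each weight matrix is adjusted with the error
    signal available at its own layer (a Hebbian-like outer product), rather
    than a gradient propagated end-to-end through the network."""
    global W_enc, W_gen
    x = np.concatenate([obs, one_hot(act)])
    z = np.tanh(W_enc @ x)                   # latent state
    pred = W_gen @ z                         # predicted next observation
    e_out = next_obs - pred                  # output-layer error units
    e_hid = (W_gen.T @ e_out) * (1 - z**2)   # locally projected error
    W_gen += lr * np.outer(e_out, z)         # local delta rule, top layer
    W_enc += lr * np.outer(e_hid, x)         # local delta rule, bottom layer
    return float(e_out @ e_out)              # epistemic signal: prediction error


def select_action(obs, value_proxy):
    """Score each action by the imagined next observation it leads to; a crude
    stand-in for planning-as-inference style action selection."""
    scores = []
    for a in range(act_dim):
        x = np.concatenate([obs, one_hot(a)])
        z = np.tanh(W_enc @ x)
        scores.append(value_proxy(W_gen @ z))
    return int(np.argmax(scores))


# Toy rollout: the prediction error acts as an exploration bonus, so learning
# continues even when the instrumental (task) reward is almost always zero.
obs = rng.standard_normal(obs_dim)
for t in range(100):
    a = select_action(obs, value_proxy=lambda o: o.sum())   # placeholder proxy
    next_obs = 0.9 * obs + 0.1 * rng.standard_normal(obs_dim)  # fake dynamics
    task_reward = 0.0                                        # sparse reward
    epistemic = local_update(obs, a, next_obs)
    total_signal = task_reward + 0.1 * epistemic             # combined drive
    obs = next_obs
```

The only point of the sketch is that every weight update uses signals available at its own layer, which is one reasonable reading of the abstract's "without backpropagation of errors" claim; the actual ANGC architecture, its error neurons, and its planning-as-inference action selection are specified in the paper itself.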
Similar resources
Reinforcement Learning in Neural Networks: A Survey
In recent years, research on reinforcement learning (RL) has focused on bridging the gap between adaptive optimal control and bio-inspired learning techniques. Neural network reinforcement learning (NNRL) is among the most popular algorithms in the RL framework. Using neural networks enables RL to search for optimal policies more efficiently in several real-life applicat...
Generative Adversarial Active Learning
We propose a new active learning by query synthesis approach using Generative Adversarial Networks (GAN). Different from regular active learning, the resulting algorithm adaptively synthesizes training instances for querying to increase learning speed. We generate queries according to the uncertainty principle, but our idea can work with other active learning principles. We report results from ...
Deep Generative Stochastic Networks Trainable by Backprop
We introduce a novel training principle for probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution of the Markov chain is conditional on the previous state, generally invo...
Reinforcement learning of recurrent neural network for temporal coding
We study reinforcement learning for temporal coding with a neural network consisting of stochastic spiking neurons. In neural networks, information can be coded by characteristics of the timing of each neuronal firing, including the order of firing or the relative phase differences of firing. We derive the learning rule for this network and show that the network consisting of Hodgkin-Huxley neu...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2022
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v36i1.19876